
Toward a Unified Security Framework for AI Agents: Trust, Risk, and Liability

Mo, Jiayun, Kang, Xin, Li, Tieyan, Lei, Zhongding

arXiv.org Artificial Intelligence

The excitement surrounding the development of AI agents has been accompanied by emerging problems. These concerns center on users' trust in AI, the risks involved, and the difficulty of attributing responsibility and liability. Current solutions attempt to address each problem separately without acknowledging their mutual influence. The Trust, Risk, and Liability (TRL) framework proposed in this paper, in contrast, ties together the interdependent relationships among trust, risk, and liability to provide a systematic method for building and enhancing trust, analyzing and mitigating risks, and allocating and attributing liability. It can be applied to analyze any application scenario involving AI agents and to suggest measures appropriate to the context. The implications of the TRL framework lie in its potential societal, economic, ethical, and other impacts. It is expected to bring remarkable value to addressing potential challenges and promoting trustworthy, low-risk, and responsible use of AI in 6G networks.


Who Is Liable When AI Kills?

#artificialintelligence

Who is responsible when AI harms someone? A California jury may soon have to decide. In December 2019, the driver of a Tesla using its artificial-intelligence driving system killed two people in an accident in Gardena. The Tesla driver faces several years in prison. In light of this and other incidents, both the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board are investigating Tesla crashes, and NHTSA has recently broadened its probe to explore how drivers interact with Tesla systems.


To Spur Growth in AI, We Need a New Approach to Legal Liability

#artificialintelligence

Artificial intelligence (AI) is sweeping through industries ranging from cybersecurity to environmental protection -- and the Covid-19 pandemic has only accelerated this trend. AI may improve the lives of millions, but it will also inevitably cause accidents that injure people or parties -- indeed, it already has, through incidents like autonomous-vehicle crashes. An outdated liability system in the United States and other countries, however, is unable to manage these risks -- a problem because those risks can impede AI innovation and adoption. It is therefore crucial that we reform the liability system. Doing so will help speed AI innovation and adoption.


Exploring the Ethics Behind Self-Driving Cars

#artificialintelligence

Imagine a runaway trolley barreling toward five people standing on the tracks up ahead. You can pull a lever to divert the trolley onto a different set of tracks where only one person is standing. Is the moral choice to do nothing and let the five people die? Or should you hit the switch and thereby actively participate in a different person's death? In the real world, the "trolley problem," first posed by philosopher Philippa Foot in 1967, is an abstraction most of us will never actually have to face. And yet, as driverless cars roll into our lives, policymakers and auto manufacturers are edging into similar ethical dilemmas.